
    Mechanisms of motor learning: by humans, for robots

    Whenever we perform a movement and interact with objects in our environment, our central nervous system (CNS) adapts and controls the redundant system of muscles actuating our limbs to produce forces and impedance suitable for the interaction. As modern robots are increasingly used to interact with objects, humans and other robots, they too need to continuously adapt their interaction forces and impedance to the situation. This thesis investigated these motor mechanisms in humans through a series of technical developments and experiments, and used the results to implement biomimetic motor behaviours on a robot. Original tools were first developed, which enabled two novel motor imaging experiments using functional magnetic resonance imaging (fMRI). The first experiment investigated the neural correlates of force and impedance control to understand the control structure employed by the human brain. The second experiment developed a regressor-free technique to detect dynamic changes in brain activation during learning, and applied this technique to investigate changes in neural activity during adaptation to force fields and visuomotor rotations. In parallel, a psychophysical experiment investigated motor optimization in humans in a task characterized by multiple error-effort optima. Finally, a computational model derived from some of these results was implemented to exhibit human-like control and adaptation of force, impedance and movement trajectory in a robot.
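
    A minimal sketch of the kind of error-driven force and impedance adaptation described above is given below. The update rule, gains and disturbance value are illustrative assumptions, not the thesis's actual computational model: feedforward force is adjusted in proportion to the error, while stiffness grows with error magnitude and relaxes as performance improves.

        def adapt_trial(u_ff, K, error, alpha=0.5, beta=0.8, gamma=0.05):
            """One trial of a generic error-driven adaptation rule (illustrative only).

            u_ff  : feedforward force compensating the environment
            K     : limb stiffness (impedance) used on this trial
            error : movement/force error observed on this trial
            """
            u_ff_new = u_ff + alpha * error                # learn to predict the disturbance
            K_new = K + beta * abs(error) - gamma * K      # stiffen when errors are large, relax otherwise
            return u_ff_new, max(K_new, 0.0)

        # Simulate trial-by-trial adaptation to a constant 5 N disturbance force
        disturbance, u_ff, K = 5.0, 0.0, 1.0
        for trial in range(30):
            error = disturbance - u_ff                     # residual error before feedback correction
            u_ff, K = adapt_trial(u_ff, K, error)
        print(f"learned feedforward force ~ {u_ff:.2f} N, final stiffness ~ {K:.2f}")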

    Host and Bacterial Determinants of Staphylococcus aureus Nasal Colonization in Humans

    Staphylococcus aureus (SA), an opportunistic pathogen colonizing the anterior nares in approximately 30% of the human population, causes severe hospital-associated and community-acquired infections. SA nasal carriage plays a critical role in the pathogenesis of staphylococcal infections, and SA eradication from the nares has proven effective in reducing endogenous infections. To understand SA nasal colonization and its relation to consequent disease, assessing nasal carriage dynamics in a diverse population and determining the factors responsible for SA nasal carriage have become major imperatives. Here, we report on extensive longitudinal monitoring of SA nasal carriage in 109 healthy individuals over a period of up to three years to assess nasal carriage dynamics. Phylogenetic analyses of SA housekeeping genes and hypervariable virulence genes revealed that not only were the SA strains colonizing intermittent and persistent nasal carriers genetically similar, but no preferential colonization of specific SA strains in these carriers was observed over time. These results indicated that other, non-SA factors could be involved in determining specific carriage states. Therefore, to elucidate host responses during SA nasal carriage, we performed human SA nasal recolonization in a subset of SA nasal carriers within our cohort. In these studies, SA colonization levels were determined, and nasal secretions were collected and analyzed for host immune factors responsible for SA nasal carriage. Interestingly, we observed that stimulation of host immune responses led to clearance of SA, while sustained SA colonization was observed in hosts that did not mount a response during carriage. Further, analysis of nasal secretions revealed that proinflammatory cytokines and chemokines were significantly induced during SA nasal clearance, suggesting that innate immune effectors influence carriage. SA utilizes a repertoire of surface and secreted proteins to evade the host immune response and successfully colonize the nose. Analysis of the most abundant immunoevasive proteins in the exoproteome of SA nasal carrier strains revealed that the expression level of Staphylococcal protein A (SPA) produced by SA nasal carrier strains in vitro corresponded to the level of persistence of SA in the human nose. To determine whether SPA is involved in modulating the host's response to SA colonization, a subset of participants in our cohort was nasally recolonized with equal concentrations of both wild-type (WT) and spa-disrupted (Δspa) autologous strains of SA. Interestingly, Δspa strains were eliminated from the nares significantly faster than WT when the host mounted an immune response, suggesting that the immunoevasive role of SPA is a determinant of carriage persistence. Collectively, this report augments our understanding of SA nasal carriage dynamics, in addition to identifying important host and microbial determinants that influence SA nasal colonization in humans. A better understanding of this phenomenon can lead to improved preventative strategies to thwart carriage-associated SA infections.

    Integrated reliable and robust design

    The objective of this research is to develop an integrated design methodology for reliability and robustness. Reliability-based design (RBD) and robust design (RD) are used to obtain optimal designs characterized by a low probability of failure and minimum performance variation, respectively. However, performing both RBD and RD in a product design can be conflicting and time consuming, so an integrated design model is needed to achieve reliability and robustness simultaneously. The purpose of this thesis is to integrate reliability and robustness. To achieve this objective, we first study the general relationship between reliability and robustness. We then perform a numerical study of this relationship by combining reliability-based design, robust design, multi-objective optimization and Taguchi's quality loss function into an integrated design model. This integrated model yields reliable and robust optimum design values by simultaneously minimizing the probability of failure and the quality loss function of the design. Based on the results of the numerical study, we propose a generalized quality loss function that considers both the safe region and the failure region: Taguchi's quality loss function defines quality loss in the safe design region, and a risk function defines quality loss in the failure region. The integrated model achieves reliability and robustness by minimizing this general quality loss function of the design. Example problems show that the methodology is computationally efficient compared with other optimization models. Results from the various examples suggest that this method can be used to efficiently minimize the probability of failure and the total quality loss of a design simultaneously.
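
    The sketch below illustrates one way such an integrated criterion could be assembled: a weighted sum of a Monte Carlo estimate of the probability of failure and a Taguchi-style quadratic quality loss. The performance function, target, limit, weights and noise model are all hypothetical placeholders, not the thesis's formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def performance(x, noise):
            """Hypothetical performance function y = f(design, noise); stands in for a real engineering model."""
            return x[0] * noise[:, 0] + x[1] * noise[:, 1] ** 2

        def integrated_objective(x, target=10.0, limit=12.0, k=1.0, w=0.5, n=20_000):
            """Weighted sum of Taguchi quality loss (robustness) and Monte Carlo failure probability (reliability)."""
            noise = rng.normal(loc=1.0, scale=0.1, size=(n, 2))      # noise factors around a nominal value of 1.0
            y = performance(np.asarray(x, dtype=float), noise)
            quality_loss = k * np.mean((y - target) ** 2)            # Taguchi quadratic loss about the target
            p_failure = np.mean(y > limit)                           # estimated probability of exceeding the limit
            return w * quality_loss + (1.0 - w) * p_failure

        # Compare two candidate designs under the combined criterion
        for design in ([8.0, 2.0], [9.0, 1.0]):
            print(design, round(integrated_objective(design), 4))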

    Improvement of mechanical properties and water stability of vegetable protein based plastics

    Bio-renewable, biodegradable plastics are a potential solution to the growing problems of pollution caused by petroleum plastics and dependency on foreign nations for petroleum resources. One possible feedstock for these materials is vegetable protein, especially from soybean and corn. These proteins have relatively high molecular weights and have the potential to be processed with standard polymer processing technologies, but some issues that need to be addressed are their water instability (soy protein) and inferior mechanical properties compared to petroleum-derived plastics. In this study, soy protein isolate (SPI) and zein protein were processed with various additives and different process variables to improve their mechanical and water absorption properties.

    SPI, a food-grade protein isolate (90% protein) extracted from soybeans, was mixed with solvents such as water and glycerol and with preservative salts to form the base resin. The resin was extruded in its control composition as well as with additives such as zinc stearate and zinc sulfite, and blended with poly-epsilon-caprolactone (PCL), to obtain pellets of five different compositions. The extrudate was pelletized and injection molded into ASTM dog-bone samples, which were used for characterization. The results indicated that the blends with PCL were relatively water stable. Thermocycling of the control composition at 100°C improved the tensile strength significantly.

    Zein, an alcohol-soluble protein from corn endosperm, was cast into films after dissolution in a solvent (ethanol) and the addition of additives and/or plasticizers. The control formulation, based on screening experiments, was varied with the addition of different percentages of nanoclay, and the effect of nanoclay exfoliation by ultrasonics on zein cast sheets was investigated. The results indicated that the control formulation had better mechanical properties, but the addition of nanoclays improved the water absorption properties of the films.

    Error Correction Using Probabilistic Language Models

    Error correction has applications in a variety of domains, given the prevalence of errors of various kinds and the need to correct them programmatically and as accurately as possible. For example, error correction is used in portable mobile devices to fix typographical errors while taking input from the keypad. It can also be useful in lower-level applications, such as fixing errors in storage media or network transmission errors. The precision and influence of such techniques can vary based on requirements and the capabilities of the correction technique, but they essentially form part of the application and are needed for its effective functioning. This research primarily focuses on techniques that provide error correction given the location of the erroneous token. The errors are essentially erasures: missing bits in a stream of binary data whose locations are known. The basic idea behind these techniques is to build up contextual information from an error-free training corpus and, using these models, provide alternative suggestions that can replace the erroneous tokens. We look into two models: the topic-based LDA (Latent Dirichlet Allocation) model and the N-gram model. We also propose an efficient mechanism to process such errors which offers exponential speed-ups. Using these models, we are able to achieve up to a 5% improvement in accuracy compared to a standard word distribution model using minimal domain knowledge.
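
    As a rough illustration of the N-gram approach, the sketch below ranks candidate replacements for an erased token using bigram counts from a tiny, made-up corpus with add-alpha smoothing; it does not reproduce the thesis's LDA model or its speed-up mechanism.

        from collections import Counter, defaultdict

        def train_bigrams(corpus):
            """Count unigrams and bigrams from an error-free training corpus."""
            unigrams, bigrams = Counter(), defaultdict(Counter)
            for sentence in corpus:
                tokens = sentence.lower().split()
                unigrams.update(tokens)
                for prev, cur in zip(tokens, tokens[1:]):
                    bigrams[prev][cur] += 1
            return unigrams, bigrams

        def suggest(tokens, i, unigrams, bigrams, alpha=1.0):
            """Rank candidates for the erased token at position i using its left and right context."""
            vocab = list(unigrams)
            left = tokens[i - 1] if i > 0 else None
            right = tokens[i + 1] if i + 1 < len(tokens) else None
            def score(w):
                v, s = len(vocab), 1.0
                if left:   # P(w | left) with add-alpha smoothing
                    s *= (bigrams[left][w] + alpha) / (sum(bigrams[left].values()) + alpha * v)
                if right:  # P(right | w) with add-alpha smoothing
                    s *= (bigrams[w][right] + alpha) / (sum(bigrams[w].values()) + alpha * v)
                return s
            return sorted(vocab, key=score, reverse=True)[:3]

        corpus = ["the cat sat on the mat", "the dog sat on the rug", "a cat chased the dog"]
        uni, bi = train_bigrams(corpus)
        print(suggest("the ??? sat on the mat".split(), 1, uni, bi))

    Swapping the toy corpus for a domain corpus, and the bigram scores for topic-conditioned probabilities, changes only the scoring function in this sketch.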

    Investigation of Ultrasonics as a tool for energy efficient recycling of Lactic acid from postconsumer PLA products

    The growing use of ecofriendly, biodegradable polymers has created a need for a suitable recycling technique because, unlike petroleum-derived plastics, their properties deteriorate during conventional recycling. New recycling techniques must be cost efficient and yield material properties equivalent to those of the virgin polymer. This research investigates the effectiveness of high-power ultrasonics as an efficient technique to recover lactic acid from postconsumer polylactic acid (PLA) products. Polylactic acid is a commercially available bioplastic derived from corn starch and/or sugar cane that is biorenewable and compostable (biodegradable). Ongoing research efforts to recover lactic acid from PLA share a common platform of high temperature and high pressure (HTHP) to effect polymer hydrolysis. The energy intensiveness of these HTHP processes prompted this work to investigate ultrasonics as a low-energy alternative for PLA depolymerization. The energy consumption and the time required for depolymerization were used as metrics to quantify and compare ultrasonically enhanced depolymerization with the hot-bath technique. The coupled effects of catalyst concentration and different solvents, together with ultrasonics, were studied based on preliminary trial results. In addition, the correlation of depolymerization rate with ultrasonic amplitude, treatment time, and catalyst concentration and type was analyzed. The results indicate that depolymerization of PLA was largely driven by ultrasonically induced heating, while the mechanical effects of cavitation and acoustic streaming were shown to contribute little to enhancing depolymerization. In fact, thermal energy predominantly governed the reaction kinetics: heat introduced by the conventional method (i.e., electrical heaters) was more efficient than ultrasonic heating in terms of energy per unit mass of PLA and depolymerization time. The degree of crystallinity was also an important factor affecting the reaction kinetics; amorphous PLA depolymerized faster than semi-crystalline PLA under the same conditions. While the depolymerization of PLA was anticipated to require 15 to 30 minutes or extreme conditions [40], it was determined that with K2CO3 or NaOH catalysts and methanol as the medium, PLA could be fully depolymerized within a few minutes. This information provides insight into effective pathways for the depolymerization of PLA, reducing the environmental impact of material use. The effects of ultrasonics were modeled with finite element analysis (FEA) based on fundamental concepts, and the predictions were confirmed by studying real-time streaming and fluid flow inside the treatment cell using particle image velocimetry (PIV). The FEA models of ultrasonic streaming were in reasonable agreement with the experimental values, validating the simplifying assumptions for future researchers.

    Optimum Size of Nanorods for Heating Application

    Magnetic nanoparticles (MNPs) have become increasingly important in heating applications such as hyperthermia treatment of cancer due to their ability to release heat when a remote external alternating magnetic field is applied. It has been shown that the heating capability of such particles varies significantly with particle size. In this paper, we theoretically evaluate the heating capability of rod-shaped MNPs and identify the conditions under which these particles display the highest efficiency. For optimally sized monodisperse particles, the power generated by rod-shaped particles is found to be equal to that generated by spherical particles. However, for particles with a dispersion in size, rod-shaped particles are found to be more effective in heating as a result of the greater spread in the power density distribution curve. Additionally, for rod-shaped particles, a dispersion in the radius of the particle contributes more to the reduction in loss power than a dispersion in the length. We further identify the optimum size, i.e., the radius and length of the nanorods, given a bivariate log-normal distribution of particle size in two dimensions.
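
    The size-dispersion argument can be illustrated with a small numerical sketch: average a loss-power function over a bivariate log-normal distribution of radius and length. The loss-power function below is a placeholder chosen to be narrower in radius than in length (an assumption made for illustration), not the model evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def loss_power(radius_nm, length_nm, peak_r=10.0, peak_l=60.0, w_r=0.15, w_l=0.40):
            """Placeholder single-rod loss power, assumed to peak at an optimal size and to be
            narrower in radius than in length (an assumption, not the paper's actual model)."""
            return np.exp(-np.log(radius_nm / peak_r) ** 2 / (2 * w_r ** 2)
                          - np.log(length_nm / peak_l) ** 2 / (2 * w_l ** 2))

        def mean_power(med_r, med_l, sigma_r, sigma_l, n=200_000):
            """Average loss power over a bivariate log-normal size distribution (axes assumed independent)."""
            r = rng.lognormal(mean=np.log(med_r), sigma=sigma_r, size=n)
            l = rng.lognormal(mean=np.log(med_l), sigma=sigma_l, size=n)
            return loss_power(r, l).mean()

        # With this placeholder, dispersion in radius lowers the mean loss power more than the same
        # dispersion in length, so the best median size depends on both sigmas.
        print(mean_power(10.0, 60.0, sigma_r=0.05, sigma_l=0.20))
        print(mean_power(10.0, 60.0, sigma_r=0.20, sigma_l=0.05))

    Replacing the placeholder with an actual single-particle power expression turns this averaging step into a search for the optimum median radius and length.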

    A Power Efficient Server-to-Server Wireless Data Center Network Architecture Using 60 GHz Links

    Data centers have become the digital backbone of modern society with the advent of cloud computing, social networking, big data analytics and similar services, and they play a vital role in processing the large amounts of information generated. The number of data centers, and the servers within them, has been rising over the last decade. This has led to an increase in data center power consumption due to the power-hungry interconnect fabric of switches and routers needed for communication within the data center. Moreover, a major portion of the power consumed in a data center goes to the cooling infrastructure, and the complex cabling hinders heat dissipation by obstructing air flow, increasing the demand on that infrastructure. The complex cabling in traditional data centers also poses design and maintenance challenges. In this work, these problems are addressed by designing a new server-to-server wireless Data Center Network (DCN) architecture. The proposed design methodology uses the unlicensed 60 GHz millimeter-wave band to establish direct communication links between servers in a DCN without the need for a conventional fabric. This reduces the power consumption of the DCN significantly by eliminating the power-hungry switches, while increasing the independence of server-to-server communication. In this work, previous data center traffic models are studied, and a new traffic model closely resembling actual data center traffic is developed and used to simulate the DCN environment. It is estimated that the proposed DCN architecture lowers power consumption by six to ten times compared to existing conventional DCN architectures. Having established the power model of a server-to-server wireless DCN in terms of its power consumption, we demonstrate that such a power-efficient wireless DCN can sustain the traffic requirements encountered and provide data rates comparable to traditional DCNs. We also compare the efficiency and performance of the proposed DCN architecture with other novel DCN architectures such as DCell and BCube under the same traffic.
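
    A first-order sketch of the power comparison is shown below: a three-tier switched fabric is compared with per-server 60 GHz transceivers. All device counts and power figures are illustrative assumptions, not the thesis's measured power model; the point is only the shape of the comparison.

        def switched_dcn_power(n_servers, servers_per_rack=40, p_tor=150.0,
                               p_agg=3000.0, p_core=5000.0, agg_ratio=10, core_ratio=4):
            """Three-tier switched fabric: ToR + aggregation + core switch power (illustrative figures, in W)."""
            n_tor = -(-n_servers // servers_per_rack)          # ceiling division
            n_agg = -(-n_tor // agg_ratio)
            n_core = -(-n_agg // core_ratio)
            return n_tor * p_tor + n_agg * p_agg + n_core * p_core

        def wireless_dcn_power(n_servers, radios_per_server=2, p_radio=1.0):
            """Server-to-server 60 GHz fabric: only per-server transceiver power (illustrative figures, in W)."""
            return n_servers * radios_per_server * p_radio

        for n in (1_000, 10_000):
            ps, pw = switched_dcn_power(n), wireless_dcn_power(n)
            print(f"{n} servers: switched ~ {ps / 1e3:.1f} kW, wireless ~ {pw / 1e3:.1f} kW ({ps / pw:.1f}x)")

    With these placeholder figures the fabric power lands in the same six-to-ten-times range quoted above, but the ratio depends entirely on the assumed per-device numbers.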